Designing Meaningful AI Transparency and Data Control Features for User Safety
Challenge: How can we help users develop trust and feel safer when interacting with ads on Google platforms? What role do data controls and ads AI transparency features play in making users feel safe?
Approach: Led three qualitative studies including interviews, card sorting, and diary studies with 55+ participants across 3 countries to understand user safety needs and preferences.
Outcome: Research directly informed the first design of the Google Ad Center, which is now a public tool used by millions of users.
In this work, I led three qualitative studies to support the design of meaningful Ads AI transparency and data control features. I worked towards promoting user safety and trust in the open internet by asking and answering the following research questions:
To answer the first research question, I conducted two interview studies and used card sorting techniques with 30+ users spanning 3 countries. I single-handedly defined the recruitment criteria, developed the study plan, conducted the interviews, performed data analysis, and communicated findings through reports and presentations.
To answer the second research question, I collaborated with dScout to conduct a week-long diary study with 25+ participants. Participants were prompted to reflect on their interactions with ads multiple times a day and complete a short survey after each interaction.
I am unable to share my research findings due to confidentiality agreements. My work ultimately informed the first design of the Google Ad Center, which is now a public tool.
Key Contributions: